
    Initial Responses to False Positives in AI-Supported Continuous Interactions: A Colonoscopy Case Study

    The use of artificial intelligence (AI) in clinical support systems is increasing. In this article, we focus on AI support for continuous interaction scenarios. A thorough understanding of end-user behaviour during these continuous human-AI interactions, in which user input is sustained over time and AI suggestions can appear at any moment, is still missing. We present a controlled lab study involving 21 endoscopists and an AI colonoscopy support system. Using a custom-developed application and an off-the-shelf videogame controller, we record participants’ navigation behaviour and clinical assessment across 14 endoscopic videos. Each video is manually annotated to mimic an AI recommendation that is either a true positive or a false positive. We find that the time between AI recommendation and clinical assessment is significantly longer for incorrect assessments. Further, the type of medical content displayed significantly affects decision time. Finally, we find that a participant’s clinical role plays a large part in their perception of clinical AI support systems. Our study presents a realistic assessment of the effects of imperfect and continuous AI support in a clinical scenario.

    Spatio-temporal classification for polyp diagnosis

    Colonoscopy remains the gold standard investigation for colorectal cancer screening as it offers the opportunity to both detect and resect pre-cancerous polyps. Computer-aided polyp characterisation can determine which polyps need polypectomy, and recent deep learning-based approaches have shown promising results as clinical decision support tools. Yet polyp appearance during a procedure can vary, making automatic predictions unstable. In this paper, we investigate the use of spatio-temporal information to improve the performance of lesion classification as adenoma or non-adenoma. Two methods are implemented, showing an increase in performance and robustness in extensive experiments on both internal and openly available benchmark datasets.
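
    The abstract does not give implementation details, so as a minimal sketch of one common way to exploit spatio-temporal information, the toy PyTorch module below averages per-frame CNN logits over a short clip to obtain a clip-level prediction; the ResNet-18 backbone, clip length and two-class output are illustrative assumptions, not the authors' method.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class TemporalAverageClassifier(nn.Module):
    """Classify a short clip by averaging per-frame CNN logits (illustrative only)."""

    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Hypothetical backbone choice; the paper does not specify this architecture.
        self.backbone = models.resnet18(weights=None)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, num_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, channels, height, width)
        b, t, c, h, w = clip.shape
        frames = clip.view(b * t, c, h, w)           # fold time into the batch dimension
        logits = self.backbone(frames).view(b, t, -1)
        return logits.mean(dim=1)                     # temporal average -> clip-level logits

# Example: a batch of 2 clips, 8 frames each
model = TemporalAverageClassifier()
scores = model(torch.randn(2, 8, 3, 224, 224))
print(scores.shape)  # torch.Size([2, 2])
```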

    Polyp detection on video colonoscopy using a hybrid 2D/3D CNN

    Colonoscopy is the gold standard for early diagnosis and pre-emptive treatment of colorectal cancer by detecting and removing colonic polyps. Deep learning approaches to polyp detection have shown potential for enhancing polyp detection rates. However, the majority of these systems are developed and evaluated on static images from colonoscopies, whilst in clinical practice the treatment is performed on a real-time video feed. Non-curated video data remains a challenge, as it contains low-quality frames when compared to still, selected images often obtained from diagnostic records. Nevertheless, it also embeds temporal information that can be exploited to increase prediction stability. A hybrid 2D/3D convolutional neural network architecture for polyp segmentation is presented in this paper. The network is used to improve polyp detection by encompassing spatial and temporal correlation of the predictions while preserving real-time detections. Extensive experiments show that the hybrid method outperforms a 2D baseline. The proposed architecture is validated on videos from 46 patients and on the publicly available SUN polyp database. A higher performance and increased generalisability indicate that real-world clinical implementations of automated polyp detection can benefit from the hybrid algorithm and the inclusion of temporal information.
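
    As a hedged illustration of the general hybrid 2D/3D idea (a per-frame 2D encoder followed by 3D convolutions across the stacked frame features), the PyTorch sketch below produces a segmentation map for the most recent frame; the layer sizes, clip length and single-frame output head are assumptions and do not reproduce the published architecture.

```python
import torch
import torch.nn as nn

class Hybrid2D3DSegmenter(nn.Module):
    """Toy hybrid 2D/3D network: per-frame 2D encoder, 3D convolutions across time,
    and a per-pixel head for the most recent frame (illustrative layer sizes)."""

    def __init__(self):
        super().__init__()
        self.encoder2d = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        # 3D convolution mixes information across neighbouring frames.
        self.temporal3d = nn.Conv3d(64, 64, kernel_size=(3, 3, 3), padding=1)
        self.head = nn.Conv2d(64, 1, kernel_size=1)  # per-pixel polyp logit

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, time, channels, height, width)
        b, t, c, h, w = clip.shape
        feats = self.encoder2d(clip.view(b * t, c, h, w))           # (b*t, 64, h, w)
        feats = feats.view(b, t, 64, h, w).permute(0, 2, 1, 3, 4)   # (b, 64, t, h, w)
        feats = torch.relu(self.temporal3d(feats))
        return self.head(feats[:, :, -1])                           # mask logits, latest frame

mask = Hybrid2D3DSegmenter()(torch.randn(1, 4, 3, 128, 128))
print(mask.shape)  # torch.Size([1, 1, 128, 128])
```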

    Exploring chemical composition and genetic dissimilarities between maize accessions

    The capacity of maize (Zea mays L.) accessions to tolerate drastically extreme conditions in Iraq contributes to the characterization of genetic resources for germplasm management and the identification of the finest genotypes for genetic improvement. Maize breeding programs therefore require knowledge of genetic variation and genetic structure. A total of 25 maize accessions from three regions (Iraq-Sulaimani, Iraq-Erbil and Iran-Sanandaj) were genotyped by chemical and phytochemical components and simple sequence repeat (SSR) markers to evaluate genetic diversity, population composition and the relationships between genetic and chemical composition dissimilarities. In terms of proximate and phytochemical parameters, the maize accessions exhibited large significant disparity, in which oil, phenol contents and the 2,2-diphenyl-1-picrylhydrazyl (DPPH) characteristic appeared to be the most discriminating features of the accessions. Altogether, 18 SSR markers produced 77 polymorphic alleles across the 25 samples, and the chosen SSRs were highly informative, with polymorphic information content (PIC) varying from 0.91 (Bnlg1890) to 0.37 (Umc1630 and Bnlg1189), and gene diversity (ranging from 0.48 to 0.91, with an average of 0.75) further illustrating the broad genetic variability of the accessions investigated. Analysis of molecular variance (AMOVA) showed that only 21% of the genetic variation occurred among populations. Pairwise PhiPT distances (0.10 to 0.31) indicated high differentiation among the populations investigated. In addition, the accessions from the three regions were differentiated into seven clusters by both clustering and population structure analysis, and the accessions were not grouped according to geographic location. Both chemical composition and SSR markers differentiated the 25 maize accessions. The Mantel test showed a significant positive correlation between the chemical composition and SSR dissimilarity matrices. The results of this research revealed that the maize accessions have broad genetic diversity, providing a source of new and unique alleles helpful for maize breeding programs to address continuing and future challenges and for characterising collections of well-known cultivars and the disparities between them.
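
    The polymorphic information content (PIC) and gene diversity values quoted above follow standard definitions; the short Python sketch below computes both from the allele frequencies of a single SSR locus using the Botstein et al. (1980) PIC formula, with hypothetical frequencies rather than values from this study.

```python
from itertools import combinations

def pic(allele_freqs):
    """Polymorphic information content (Botstein et al. 1980):
    PIC = 1 - sum(p_i^2) - sum over i<j of 2 * p_i^2 * p_j^2."""
    hom = sum(p ** 2 for p in allele_freqs)
    cross = sum(2 * (p_i ** 2) * (p_j ** 2)
                for p_i, p_j in combinations(allele_freqs, 2))
    return 1 - hom - cross

def gene_diversity(allele_freqs):
    """Expected heterozygosity (gene diversity): He = 1 - sum(p_i^2)."""
    return 1 - sum(p ** 2 for p in allele_freqs)

# Hypothetical allele frequencies for one SSR locus (not from the study)
freqs = [0.40, 0.30, 0.20, 0.10]
print(round(pic(freqs), 3), round(gene_diversity(freqs), 3))  # 0.645 0.7
```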

    Survey on the perceptions of UK gastroenterologists and endoscopists to artificial intelligence

    Background and aims: With the potential integration of artificial intelligence (AI) into clinical practice, it is essential to understand end users’ perception of this novel technology. The aim of this study, which was endorsed by the British Society of Gastroenterology (BSG), was to evaluate the UK gastroenterology and endoscopy communities’ views on AI. Methods: An online survey was developed and disseminated to gastroenterologists and endoscopists across the UK. Results: One hundred four participants completed the survey. Quality improvement in endoscopy (97%) and better endoscopic diagnosis (92%) were perceived as the most beneficial applications of AI to clinical practice. The most significant challenges were accountability for incorrect diagnoses (85%) and potential bias of algorithms (82%). A lack of guidelines (92%) was identified as the greatest barrier to adopting AI in routine clinical practice. Participants identified real-time endoscopic image diagnosis (95%) as a research priority for AI, while the most significant perceived barriers to AI research were funding (82%) and the availability of annotated data (76%). Participants consider the priorities for the BSG AI Task Force to be identifying research priorities (96%), producing guidelines for adopting AI devices in clinical practice (93%) and supporting the delivery of multicentre clinical trials (91%). Conclusion: This survey has identified the views of the UK gastroenterology and endoscopy community regarding AI in clinical practice and research, and has identified priorities for the newly formed BSG AI Task Force.

    Identifying key mechanisms leading to visual recognition errors for missed colorectal polyps using eye-tracking technology

    BACKGROUND AND AIMS: Lack of visual recognition of colorectal polyps may lead to interval cancers. The mechanisms contributing to perceptual variation, particularly for subtle and advanced colorectal neoplasia, have scarcely been investigated. We aimed to evaluate visual recognition errors and provide novel mechanistic insights. METHODS: Eleven participants (7 trainees, 4 medical students) evaluated images from the UCL polyp perception dataset, containing 25 polyps, using eye-tracking equipment. Gaze errors were defined as those where the lesion was not observed according to eye-tracking technology. Cognitive errors occurred when lesions were observed but not recognised as polyps by participants. A video study was also performed including 39 subtle polyps, where polyp recognition performance was compared with a convolutional neural network (CNN). RESULTS: Cognitive errors occurred more frequently than gaze errors overall (65.6%), with a significantly higher proportion in trainees (P=0.0264). In the video validation, the CNN detected significantly more polyps than trainees and medical students, with per-polyp sensitivities of 79.5%, 30.0% and 15.4%, respectively. CONCLUSIONS: Cognitive errors were the most common reason for visual recognition errors. The impact of interventions such as artificial intelligence, particularly on different types of perceptual errors, needs further investigation, including potential effects on learning curves. To facilitate future research, a publicly accessible visual perception colonoscopy polyp database was created.

    Computer aided characterization of early cancer in Barrett's esophagus on i-scan magnification imaging - Multicenter international study

    BACKGROUND AND AIMS: We aimed to develop a computer-aided characterization system that can support the diagnosis of dysplasia in Barrett's esophagus (BE) on magnification endoscopy. METHODS: Videos were collected in high-definition magnification white light and virtual chromoendoscopy with i-scan (Pentax Hoya, Japan) imaging in patients with dysplastic/non-dysplastic BE (NDBE) from 4 centres. We trained a neural network with a ResNet101 architecture to classify frames as dysplastic or non-dysplastic. The network was tested on three different scenarios: high-quality still images, all available video frames and a selected sequence within each video. RESULTS: 57 different patients, each with videos of magnification areas of BE (34 dysplasia, 23 NDBE), were included. Performance was evaluated using a leave-one-patient-out cross-validation methodology. 60,174 (39,347 dysplasia, 20,827 NDBE) magnification video frames were used to train the network. The testing set included 49,726 i-scan 3/optical enhancement magnification frames. On 350 high-quality still images the network achieved a sensitivity of 94%, specificity of 86% and area under the ROC curve (AUROC) of 96%. On all 49,726 available video frames the network achieved a sensitivity of 92%, specificity of 82% and AUROC of 95%. On a selected sequence of frames per case (a total of 11,471 frames) we used an exponentially weighted moving average of classifications on consecutive frames to characterize dysplasia. The network achieved a sensitivity of 92%, specificity of 84% and AUROC of 96%. The mean assessment speed per frame was 0.0135 seconds (SD ±0.006). CONCLUSION: Our network can characterize BE dysplasia with high accuracy and speed on high-quality magnification images and sequences of video frames, moving it towards real-time automated diagnosis.
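
    The exponentially weighted moving average used to stabilise per-frame classifications can be sketched as follows; the smoothing factor, example scores and 0.5 decision threshold are illustrative assumptions rather than the values used in the study.

```python
def ewma_smooth(probabilities, alpha=0.3):
    """Exponentially weighted moving average of per-frame probabilities.
    alpha is the weight given to the newest frame (value chosen for illustration)."""
    smoothed = [probabilities[0]]
    current = probabilities[0]
    for p in probabilities[1:]:
        current = alpha * p + (1 - alpha) * current
        smoothed.append(current)
    return smoothed

# Noisy per-frame dysplasia scores for one video sequence (hypothetical values)
frame_scores = [0.9, 0.2, 0.8, 0.85, 0.3, 0.9]
print([round(s, 2) for s in ewma_smooth(frame_scores)])
# A frame could then be flagged as dysplastic when the smoothed score exceeds, say, 0.5.
```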

    Effect of feeding frequency on the growth performance of Beetal goat kids during winter season

    Eighteen Beetal goat kids of about the same age (one month) and average weight (3.2 kg) were selected from the prevailing flock and divided randomly into three groups with 6 replicates in each group. These kids were kept separately to study the effect of feeding frequency on growth performance during the winter season. Green fodder was offered ad libitum and concentrate was given at 1% of body weight to each kid. Groups A (control), B and C were fed two, three and four times daily, respectively. The parameters studied were feed intake, weight gain, body measurements (height, girth and length), environmental temperature and relative humidity. There were significant differences in dry matter intake (DMI; P<0.01), weight gain (P<0.05) and body height (P<0.01) between treatments (feeding frequencies). Body girth and body length also differed significantly (P<0.05) between group A and groups B and C, whereas non-significant differences were found between kids of groups B and C on a fortnightly basis. The kids of group C performed better in terms of weekly body weight gain, daily dry matter intake and body measurements compared with groups A and B.

    A new artificial intelligence system successfully detects and localises early neoplasia in Barrett's esophagus by using convolutional neural networks

    BACKGROUND AND AIMS: Seattle protocol biopsies for Barrett's esophagus (BE) surveillance are labour intensive with low compliance. Dysplasia detection rates vary, leading to missed lesions. This can potentially be offset with computer-aided detection. We have developed convolutional neural networks (CNNs) to identify areas of dysplasia and where to target biopsy. METHODS: 119 videos were collected in high-definition white light and optical chromoendoscopy with i-scan (Pentax Hoya, Japan) imaging in patients with dysplastic and non-dysplastic BE (NDBE). We trained an indirectly supervised CNN to classify images as dysplastic/non-dysplastic using whole-video annotations to minimise selection bias and maximise accuracy. The CNN was trained using 148,936 video frames (31 dysplastic patients, 31 NDBE, two normal esophagus), validated on 25,161 images from 11 patient videos and tested on 264 i-scan 1 images from 28 dysplastic and 16 NDBE patients, which included expert delineations. To localise targeted biopsies/delineations, a second, directly supervised CNN was generated based on expert delineations of 94 dysplastic images from 30 patients. This was tested on 86 i-scan 1 images from 28 dysplastic patients. FINDINGS: The indirectly supervised CNN achieved a per-image sensitivity in the test set of 91%, specificity of 79% and area under the receiver operating characteristic curve of 93% for detecting dysplasia. Per-lesion sensitivity was 100%. Mean assessment speed was 48 frames per second (fps). 97% of targeted biopsy predictions matched expert and histological assessment at 56 fps. The artificial intelligence system performed better than six endoscopists. INTERPRETATION: Our CNNs classify and localise dysplastic Barrett's esophagus, potentially supporting endoscopists during surveillance.
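
    As a rough sketch of how whole-video annotations can yield weak per-frame labels for an indirectly supervised classifier, the snippet below assigns each video's dysplastic/NDBE label to all of its extracted frames; the CSV layout, function name and label encoding are hypothetical and not taken from the paper.

```python
import csv

def frames_with_video_labels(video_index_csv, frames_per_video):
    """Assign each video's label (dysplastic / NDBE) to all of its frames,
    giving weak per-frame labels from whole-video annotations.

    video_index_csv: CSV with columns video_id,label (hypothetical layout).
    frames_per_video: dict mapping video_id -> list of frame file paths.
    """
    labelled_frames = []
    with open(video_index_csv) as f:
        for row in csv.DictReader(f):
            label = 1 if row["label"] == "dysplastic" else 0   # 1 = dysplastic, 0 = NDBE
            for frame_path in frames_per_video.get(row["video_id"], []):
                labelled_frames.append((frame_path, label))
    return labelled_frames
```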

    Benefits and challenges in implementation of artificial intelligence in colonoscopy: World Endoscopy Organization position statement

    The number of artificial intelligence (AI) tools for colonoscopy on the market is increasing, with supporting clinical evidence. Nevertheless, their implementation is not going smoothly for a variety of reasons, including lack of data on clinical benefits and cost-effectiveness, lack of trustworthy guidelines, uncertain indications, and the cost of implementation. To address this issue and better guide practitioners, the World Endoscopy Organization (WEO) has provided its perspective on the status of AI in colonoscopy in this position statement. WEO Position Statement: Statement 1.1: Computer-aided detection (CADe) for colorectal polyps is likely to improve colonoscopy effectiveness by reducing adenoma miss rates and thus increase adenoma detection; Statement 1.2: In the short term, use of CADe is likely to increase health-care costs by detecting more adenomas; Statement 1.3: In the long term, the increased cost of CADe could be balanced by savings in costs related to cancer treatment (surgery, chemotherapy, palliative care) due to CADe-related cancer prevention; Statement 1.4: Health-care delivery systems and authorities should evaluate the cost-effectiveness of CADe to support its use in clinical practice; Statement 2.1: Computer-aided diagnosis (CADx) for diminutive polyps (≤5 mm), when it has sufficient accuracy, is expected to reduce health-care costs by reducing polypectomies, pathological examinations, or both; Statement 2.2: Health-care delivery systems and authorities should evaluate the cost-effectiveness of CADx to support its use in clinical practice; Statement 3: We recommend that a broad range of high-quality cost-effectiveness research should be undertaken to understand whether AI implementation benefits populations and societies in different health-care systems.